11 research outputs found

    Data Analytics in Web-based Education in the Higher-education Classroom

    The attention span of students in a classroom is short. To counter this, various active learning methodologies have been used in the past. Active learning keeps students engaged throughout the lecture by breaking it into intervals and interleaving breaks, demonstrations, and questions after each interval. Clickers and laptops are the tools most commonly used for active learning in the higher-education classroom, and most experiments studying student characteristics such as learning performance and attention rely on them. However, most of these experiments take place in controlled settings, do not scale, and compromise the privacy of students. We overcome these problems in an active learning setup in the higher-education classroom using a web-mediated teaching tool called ASQ. ASQ is a web application for giving classroom presentations in which the presenter controls the flow of the presentation and can interleave it with questions, videos, and other interactive JavaScript components. Anyone can join a presentation anonymously using a web browser. ASQ tracks every student interaction by generating event logs each second. Previous work with ASQ has shown that these logs can be used to infer the attention level of students in the classroom.
The goal of this thesis is to gather insights into the fine-grained study behaviour of students in a higher-education classroom by analyzing these event logs. We investigate (i) the effect of lecture elements (such as the difficulty, relative positioning, and spacing of questions, and the duration of discussion on the slides) on the study behaviour of students (such as attention level, performance, and reaction time while answering questions); (ii) the relationship between students' attention percentage and their participation in in-class questions; and (iii) whether students take external help when answering questions during the lecture, and whether their tendency to do so is related to question difficulty. We conduct our study in a classroom of around 300 students, over 15 lectures of the Web and Database Technology course at TU Delft taught by 2 instructors. We find significant effects of (i) the spacing of questions on reaction time and of the instructor on performance; (ii) the length of the discussion time associated with a slide on the attention level of students, which agrees with past studies; and (iii) the relative positioning of questions on student performance. However, we find no significant effect of question difficulty on students' performance or reaction time while answering these questions. We also find that students with a higher attention percentage participate significantly more in in-class questions. Finally, we find that students do take external help while answering questions, but their tendency to do so does not depend on the difficulty of the questions.
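The abstract does not specify ASQ's log schema, but the core analysis it describes (turning per-second event logs into a per-student attention percentage) can be sketched as follows; the field names `student`, `second`, and `focused` are illustrative assumptions, not ASQ's actual format.

```python
from collections import defaultdict

# Hypothetical per-second event log entries; ASQ's real schema is not
# given in the abstract, so these field names are illustrative only.
events = [
    {"student": "s1", "second": 0, "focused": True},
    {"student": "s1", "second": 1, "focused": False},
    {"student": "s1", "second": 2, "focused": True},
    {"student": "s2", "second": 0, "focused": True},
    {"student": "s2", "second": 1, "focused": True},
    {"student": "s2", "second": 2, "focused": True},
]

def attention_percentage(events):
    """Share of logged seconds each student spent focused on the lecture."""
    focused = defaultdict(int)
    total = defaultdict(int)
    for e in events:
        total[e["student"]] += 1
        focused[e["student"]] += int(e["focused"])
    return {s: 100.0 * focused[s] / total[s] for s in total}

print(attention_percentage(events))  # s1: ~66.7, s2: 100.0
```

A measure like this could then be correlated with participation in in-class questions, as the thesis does at much larger scale.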

    Literature Review on Co-Located Collaboration Modeling Using Multimodal Learning Analytics—Can We Go the Whole Nine Yards?

    Collaboration is one of the important 21st-century skills. It can take place in remote or co-located settings. Co-located collaboration (CC) is a complex process involving subtle human interactions that can be described with indicators from different modalities, such as eye gaze, speaking time, pitch, and social skills. With the advent of sensors, multimodal learning analytics has gained momentum for detecting CC quality. Indicators (or low-level events) can be used to detect CC quality with the help of measurable markers (i.e., indexes composed of one or more indicators), which give the high-level definition of the collaboration process. However, this understanding is incomplete without considering the scenarios of CC (such as problem solving or meetings). The scenario of CC affects the set of indicators considered: for instance, in collaborative programming, grabbing the mouse from the partner is an indicator of collaboration, whereas in collaborative meetings, eye gaze and audio level are indicators of collaboration. This can result from the differing goals and fundamental parameters (such as group behavior, interaction, or composition) of each scenario. In this article, we present our work on profiles of indicators based on a scenario-driven prioritization: the parameters of different CC scenarios are mapped onto the indicators and the available indexes. This defines a conceptual model to support the design of a system for detecting and predicting CC quality.

    Towards automatic collaboration analytics for group speech data using learning analytics

    Collaboration is an important 21st-century skill. Co-located (or face-to-face) collaboration (CC) analytics gained momentum with the advent of sensor technology. Most of this work has used the audio modality to detect the quality of CC, which can be inferred from simple indicators of collaboration, such as total speaking time, or complex ones, such as synchrony in the rise and fall of the average pitch. Most past studies focused on “how group members talk” (i.e., spectral and temporal features of audio, such as pitch) and not on “what they talk about”, even though the “what” of a conversation is more overt than the “how”. The few studies that examined what group members talk about were lab-based and showed a representative overview of specific words as topic clusters, instead of analysing the richness of the content of the conversations by understanding the linkage between these words. To overcome this, in this technical paper we take a first step, based on field trials, towards prototyping a tool for automatic collaboration analytics. We designed a technical setup to collect, process, and visualize audio data automatically. Data collection took place while university staff, with pre-assigned roles, played a board game intended to create awareness of the connection between learning analytics and learning design. We performed not only a word-level analysis of the conversations, but also analysed their richness by interactively visualizing the strength of the linkage between words and phrases. In this visualization, we used a network graph to show the turn-taking exchange between different roles alongside the word-level and phrase-level analysis, and applied centrality measures to understand how much hold certain words have over the network of words and how influential they are.
Finally, we found that this approach has certain limitations regarding the automation of speaker diarization (i.e., who spoke when) and of text-data pre-processing. We therefore concluded that, even though the technical setup is only partially automated, it is a way forward for understanding the richness of the conversations between different roles and a significant step towards automatic collaboration analytics.
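The centrality idea above can be illustrated with a minimal sketch: build a word co-occurrence graph from a few toy utterances and compute degree centrality by hand. The utterances and the co-occurrence definition (words linked if they share an utterance) are assumptions for illustration; the paper's actual pipeline works on diarized audio transcripts.

```python
from collections import defaultdict
from itertools import combinations

# Toy utterances standing in for transcribed group speech.
utterances = [
    "learning analytics informs learning design",
    "learning design needs data",
    "data informs analytics",
]

# Undirected co-occurrence graph: words appearing in the same
# utterance are linked.
neighbors = defaultdict(set)
for u in utterances:
    words = set(u.split())
    for a, b in combinations(sorted(words), 2):
        neighbors[a].add(b)
        neighbors[b].add(a)

# Degree centrality: fraction of other words a word is linked to,
# a simple proxy for how much "hold" a word has over the network.
n = len(neighbors)
centrality = {w: len(neighbors[w]) / (n - 1) for w in neighbors}
for w, c in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{w}: {c:.2f}")
```

Here "learning", "design", and "data" come out as the most central words, since they bridge all three utterances; richer measures (betweenness, eigenvector centrality) follow the same graph-based logic.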

    Towards Collaborative Convergence: Quantifying Collaboration Quality with Automated Co-located Collaboration Analytics

    Collaboration is one of the four important 21st-century skills. With the pervasive use of sensors, interest in co-located collaboration (CC) has increased lately. Most related literature has used the audio modality to detect indicators of collaboration (such as total speaking time and turn taking). CC takes place in physical spaces where group members share a social space (i.e., non-verbal audio indicators such as speaking time and gestures) and an epistemic space (i.e., verbal audio indicators such as the content of the conversation). Past literature has mostly focused on the social space to detect the quality of collaboration. In this study, we focus on both the social and the epistemic space, with an emphasis on the epistemic space, to understand different evolving collaboration patterns and collaborative convergence and to quantify collaboration quality. We conduct field trials, collecting audio recordings in 14 different sessions in a university setting while university staff and students collaborate over a board game to design a learning activity. The collaboration task consists of different phases, with each collaborating member assigned a pre-fixed role. We analyze the collected group speech data to perform role-based profiling and visualize it with the help of a dashboard.
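A basic social-space indicator mentioned across these abstracts, total speaking time per role, can be derived directly from diarized speech segments. The role names and timings below are invented for illustration; the study's real segments come from its 14 recorded board-game sessions.

```python
from collections import defaultdict

# Hypothetical diarized segments: (role, start_seconds, end_seconds).
segments = [
    ("facilitator", 0.0, 12.5),
    ("designer", 12.5, 20.0),
    ("facilitator", 20.0, 25.0),
    ("evaluator", 25.0, 31.0),
]

def speaking_time_share(segments):
    """Absolute and relative speaking time per role, a simple
    social-space indicator for role-based profiling."""
    totals = defaultdict(float)
    for role, start, end in segments:
        totals[role] += end - start
    grand = sum(totals.values())
    return {r: (t, 100.0 * t / grand) for r, t in totals.items()}

for role, (secs, share) in speaking_time_share(segments).items():
    print(f"{role}: {secs:.1f}s ({share:.1f}%)")
```

Per-role shares like these are the kind of quantity a profiling dashboard would plot per session and per collaboration phase.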

    Developing AI into explanatory supporting models: An explanation-visualized deep learning prototype

    Using Artificial Intelligence (AI) and machine learning technologies to automatically mine latent patterns from educational data holds great potential to inform teaching and learning practices. However, current AI technology mostly works as a “black box”: only the inputs and the corresponding outputs are available, which largely impedes researchers from gaining access to explainable feedback. This interdisciplinary work presents an explainable AI prototype with visualized explanations as feedback for computer-supported collaborative learning (CSCL). The study seeks to provide interpretable insights with machine learning technologies for multimodal learning analytics (MMLA) by introducing two explanatory machine-learning models (a neural network and a Bayesian network) that work in different manners (end-to-end learning and probabilistic analysis) towards the same goal: providing explainable and actionable feedback. The prototype is applied to a real-world collaborative learning scenario with data-driven learning based on sensor data from multiple modalities, and can assess collaborative learning processes and render explanatory real-time feedback.